Lipschitz continuity


Lipschitz bounds for integral kernels

Reverdi, Justin, Zhang, Sixin, Gamboa, Fabrice, Gratton, Serge

arXiv.org Machine Learning

Feature maps associated with positive definite kernels play a central role in kernel methods and learning theory, where regularity properties such as Lipschitz continuity are closely related to robustness and stability guarantees. Despite their importance, explicit characterizations of the Lipschitz constant of kernel feature maps are available only in a limited number of cases. In this paper, we study the Lipschitz regularity of feature maps associated with integral kernels under differentiability assumptions. We first provide sufficient conditions ensuring Lipschitz continuity and derive explicit formulas for the corresponding Lipschitz constants. We then identify a condition under which the feature map fails to be Lipschitz continuous and apply these results to several important classes of kernels. For infinite-width two-layer neural networks with isotropic Gaussian weight distributions, we show that the Lipschitz constant of the associated kernel can be expressed as the supremum of a two-dimensional integral, leading to an explicit characterization for the Gaussian kernel and the ReLU random neural network kernel. We also study continuous shift-invariant kernels such as the Gaussian, Laplace, and Matérn kernels, which admit an interpretation as neural networks with a cosine activation function. In this setting, we prove that the feature map is Lipschitz continuous if and only if the weight distribution has a finite second-order moment, and we then derive its Lipschitz constant. Finally, we raise an open question concerning the asymptotic convergence of the Lipschitz constant in finite-width neural networks; numerical experiments are provided to support this behavior.
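To make the shift-invariant case concrete, here is a small numerical sketch (our own illustration, not code or notation from the paper): for the Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2)), the exact feature map satisfies ||phi(x) - phi(y)||^2 = 2(1 - k(x, y)) <= ||x - y||^2 / sigma^2, so its Lipschitz constant is 1/sigma. The snippet builds a finite-width random cosine-feature map (width D, weights drawn from the kernel's spectral measure; all names and parameter choices are ours) and compares the empirical Lipschitz ratio with 1/sigma as D grows.

```python
# Minimal sketch (assumed setup, not the paper's experiments): random cosine features
# for the Gaussian kernel and an empirical estimate of the feature map's Lipschitz
# constant, to be compared with the infinite-width value 1 / sigma.
import numpy as np

rng = np.random.default_rng(0)
d, sigma = 5, 2.0

def feature_map(X, W, b):
    """Random cosine features: phi(x) = sqrt(2/D) * cos(W x + b)."""
    D = W.shape[0]
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

def empirical_lipschitz(D, n_pairs=2000, eps=1e-3):
    # W ~ N(0, I / sigma^2) matches the spectral measure of the Gaussian kernel.
    W = rng.normal(scale=1.0 / sigma, size=(D, d))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    X = rng.normal(size=(n_pairs, d))
    # Perturb each point slightly; the local ratio approximates the Lipschitz constant.
    Y = X + eps * rng.normal(size=(n_pairs, d))
    num = np.linalg.norm(feature_map(X, W, b) - feature_map(Y, W, b), axis=1)
    den = np.linalg.norm(X - Y, axis=1)
    return np.max(num / den)

for D in (10, 100, 1000, 10000):
    print(f"D = {D:6d}: empirical Lipschitz ~ {empirical_lipschitz(D):.3f}")
print(f"infinite-width value 1/sigma = {1.0 / sigma:.3f}")
```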


Shuffling the Stochastic Mirror Descent via Dual Lipschitz Continuity and Kernel Conditioning

Qiu, Junwen, Mei, Leilei, Zhang, Junyu

arXiv.org Machine Learning

The global Lipschitz smoothness condition underlies most convergence and complexity analyses through two key consequences: the descent lemma and the Lipschitz continuity of the gradient. How to analyze the performance of optimization algorithms in the absence of Lipschitz smoothness remains an active research area. The relative smoothness framework of Bauschke-Bolte-Teboulle (2017) and Lu-Freund-Nesterov (2018) provides an extended descent lemma, ensuring convergence of Bregman-based proximal gradient methods and their vanilla stochastic counterparts. However, many widely used techniques (e.g., momentum schemes, random reshuffling, and variance reduction) additionally require a Lipschitz-type bound on gradient deviations, leaving their analysis under relative smoothness an open problem. To resolve this issue, we introduce the dual kernel conditioning (DKC) regularity condition to regulate the local relative curvature of the kernel functions. Combined with relative smoothness, DKC provides a dual Lipschitz continuity for gradients: even though the gradient mapping is not Lipschitz in the primal space, it preserves Lipschitz continuity in the dual space induced by a mirror map. We verify that DKC is widely satisfied by popular kernels and is closed under affine composition and conic combination. With these novel tools, we establish the first complexity bounds as well as the iterate convergence of random reshuffling mirror descent for constrained nonconvex relative smooth problems.
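For readers unfamiliar with the setting, the following minimal sketch (our illustration under assumed choices: a finite-sum least-squares objective over the probability simplex with the negative-entropy mirror map; it does not implement the paper's DKC analysis) shows what one epoch of random-reshuffling mirror descent looks like, where each epoch passes through a fresh random permutation of the component gradients.

```python
# Sketch of random-reshuffling mirror descent on the simplex (assumed toy problem,
# not the paper's algorithmic details): mirror map h(x) = sum_j x_j log x_j, so each
# step is an exponentiated-gradient update followed by renormalization.
import numpy as np

rng = np.random.default_rng(1)
n, dim = 50, 10
A = rng.normal(size=(n, dim))
y = rng.normal(size=n)

def grad_i(x, i):
    """Gradient of the i-th component f_i(x) = 0.5 * (a_i^T x - y_i)^2."""
    return (A[i] @ x - y[i]) * A[i]

def mirror_step(x, g, eta):
    """Entropic mirror descent step: exponentiated gradient, renormalized."""
    z = x * np.exp(-eta * g)
    return z / z.sum()

x = np.full(dim, 1.0 / dim)          # start at the simplex barycenter
eta = 0.05
for epoch in range(100):
    for i in rng.permutation(n):     # random reshuffling: new permutation each epoch
        x = mirror_step(x, grad_i(x, i), eta)

print(f"final objective: {0.5 * np.mean((A @ x - y) ** 2):.4f}")
```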


Sinkhorn Barycenters with Free Support via Frank-Wolfe Algorithm

Giulia Luise, Saverio Salzo, Massimiliano Pontil, Carlo Ciliberto

Neural Information Processing Systems

We present a novel algorithm to estimate the barycenter of arbitrary probability distributions with respect to the Sinkhorn divergence. Based on a Frank-Wolfe optimization strategy, our approach proceeds by populating the support of the barycenter incrementally, without requiring any pre-allocation.
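As a rough illustration of the "incremental support" idea (a simplified sketch under our own assumptions, not the authors' implementation): a Frank-Wolfe-style loop adds one barycenter support point per iteration with step size 2/(t+2). The paper's linear minimization oracle is replaced here by a greedy search over a fixed candidate grid, and the Sinkhorn divergence is evaluated with a plain numpy Sinkhorn solver; all problem sizes and parameters are invented for the example.

```python
# Simplified free-support barycenter sketch: the support grows by one point per
# Frank-Wolfe iteration; a greedy grid search stands in for the linear minimization
# oracle used in the paper.
import numpy as np

def ot_eps(a, X, b, Y, eps=0.5, iters=100):
    """Entropic OT cost between discrete measures (a, X) and (b, Y) via Sinkhorn."""
    C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    K = np.exp(-C / eps)
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]
    return (P * C).sum()

def sinkhorn_div(a, X, b, Y):
    """Debiased Sinkhorn divergence S_eps(alpha, beta)."""
    return ot_eps(a, X, b, Y) - 0.5 * ot_eps(a, X, a, X) - 0.5 * ot_eps(b, Y, b, Y)

def objective(a, X, measures):
    return sum(sinkhorn_div(a, X, b, Y) for b, Y in measures) / len(measures)

rng = np.random.default_rng(2)
# Two toy input measures in the plane and a candidate grid for new support points.
measures = [(np.full(20, 1 / 20), rng.normal(loc=m, size=(20, 2))) for m in (-1.0, 1.0)]
grid = np.stack(np.meshgrid(np.linspace(-2, 2, 11), np.linspace(-2, 2, 11)), -1).reshape(-1, 2)

X = grid[rng.integers(len(grid))][None, :]   # initial one-point support
a = np.array([1.0])
for t in range(8):
    gamma = 2.0 / (t + 2.0)                  # standard Frank-Wolfe step size
    best = None
    for x in grid:                           # greedy stand-in for the LMO
        a_new = np.append((1 - gamma) * a, gamma)
        X_new = np.vstack([X, x[None, :]])
        val = objective(a_new, X_new, measures)
        if best is None or val < best[0]:
            best = (val, a_new, X_new)
    _, a, X = best

print("barycenter support size:", len(X), "objective:", round(best[0], 4))
```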


Towards Sharper Generalization Bounds for Structured Prediction

Neural Information Processing Systems

Specifically, in the PAC-Bayesian approach, [45, 26, 4, 22] provide generalization bounds of order O(1/√n). In the implicit embedding approach, [12, 13, 52, 11, 58, 7] provide a convergence rate of order O(1/n^{1/4}), and [53] of order O(1/√n). In the factor graph decomposition approach, [18, 51] present generalization upper bounds of order O(1/√n).


fef6f971605336724b5e6c0c12dc2534-Supplemental.pdf

Neural Information Processing Systems

[…] Taking an expectation on both sides of (17), we obtain […]. The next lemma characterizes the spectral properties of the disagreement matrix used in Lemma 4. […] W is also a stochastic matrix; its eigenvalues are those of […], each with multiplicity K, and the eigenvalue […] = 1 has multiplicity K. Again we can check the corresponding eigenspace […]. We prove this result by induction on n. For n = 1 it is trivial. Now assume that the inequality holds for all l ≤ n − 1. We provide the proof here for completeness.